15 research outputs found

    Antifragile Control Systems: The case of an oscillator-based network model of urban road traffic dynamics

    Full text link
    Existing traffic control systems possess only a local perspective over the multiple scales of traffic evolution, namely the intersection, corridor, and region levels. Yet, despite its complex mechanics, traffic exhibits various periodic phenomena. Workday flow distributions during the morning and evening commutes can be exploited to make traffic adaptive and robust to disruptions. Controlling traffic is itself a periodic process: choosing the green time phase that grants right of way to opposing directions and the complementary red time phase for adjacent directions. In our work, we consider a novel system for road traffic control based on a network of interacting oscillators. Such a model has the advantage of capturing the temporal and spatial interactions of traffic light phasing as well as the network-level evolution of macroscopic traffic features (i.e., flow and density). In this study, we propose a new realization of the antifragile control framework to control a network of interacting oscillator-based traffic light models and achieve region-level flow optimization. We demonstrate that antifragile control can capture the volatility of the urban road environment and the uncertainty about the distribution of the disruptions that can occur. We complement our control-theoretic design and analysis with experiments on a real-world setup, comparatively discussing the benefits of an antifragile design for traffic control.
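
    The abstract describes the control idea without an implementation; as a minimal sketch of the oscillator-based view, the snippet below couples a few phase oscillators (Kuramoto-style) standing in for signal cycles along a corridor, so that neighbouring intersections drift toward coordinated green phases. The chain topology, cycle length, coupling strength, and the green/red mapping are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch: each intersection i carries a phase theta_i in [0, 2*pi).
# The "main" direction is green while sin(theta_i) >= 0 and red otherwise.
# Kuramoto-style coupling pulls neighbouring intersections toward phase
# alignment, loosely mimicking green-wave coordination along a corridor.

rng = np.random.default_rng(0)

n = 5                                   # intersections along a chain corridor
adjacency = np.diag(np.ones(n - 1), 1)  # couple each intersection to its neighbours
adjacency += adjacency.T

omega = 2 * np.pi / 90.0 * np.ones(n)   # natural frequency: 90 s signal cycle
coupling = 0.05                         # coupling strength (illustrative)
theta = rng.uniform(0, 2 * np.pi, n)    # random initial phases

dt = 1.0                                # 1 s time step
for _ in range(600):                    # simulate 10 minutes
    phase_diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
    coupling_term = (adjacency * np.sin(phase_diff)).sum(axis=1)
    theta = (theta + dt * (omega + coupling * coupling_term)) % (2 * np.pi)

print("final phases (rad):", np.round(theta, 2))
print("main direction green?", np.sin(theta) >= 0)
```

    Stronger coupling locks neighbouring phases more tightly, which is the intuition behind corridor-level green waves; an antifragile controller would, in addition, adapt the frequencies and coupling in response to observed disruptions.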

    Role of Kinematics Assessment and Multimodal Sensorimotor Training for Motion Deficits in Breast Cancer Chemotherapy-Induced Polyneuropathy: A Perspective on Virtual Reality Avatars

    Get PDF
    Chemotherapy-induced polyneuropathy (CIPN), one of the most severe and incapacitating side effects of chemotherapeutic drugs, is a serious concern in breast cancer therapy, leading to dose diminution, delay, or cessation. The reversibility of CIPN is of increasing importance since active chemotherapies prolong survival. Clinical assessment tools show that patients experiencing sensorimotor CIPN symptoms not only have to cope with a loss of autonomy and quality of life, but also face CIPN as a key restricting factor in treatment. CIPN incidence poses a clinical challenge and has so far lacked established and efficient therapeutic options. Complementary, non-opioid therapies are sought for both the prevention and the management of CIPN. In this perspective, we explore the potential of digital interventions for sensorimotor CIPN rehabilitation in breast cancer patients. Our primary goal is to emphasize the benefits and impact that Virtual Reality (VR) avatars and Machine Learning have, in combination, in a digital intervention aiming at (1) assessing the complete kinematics of deficits by learning the underlying patient sensorimotor parameters, and (2) parameterizing a multimodal VR simulation to drive personalized deficit compensation. We support our perspective by evaluating the sensorimotor effects of chemotherapy, the metrics used to assess sensorimotor deficits, and relevant clinical studies. We subsequently analyse the neurological substrate of VR sensorimotor rehabilitation, with multisensory integration acting as a key element. Finally, we propose a closed-loop, patient-centered design recommendation for CIPN sensorimotor rehabilitation. Our aim is to provoke the scientific community toward the development and use of such digital interventions for more efficient and targeted rehabilitation.
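
    Purely as an illustration of the closed loop sketched in this perspective (assess kinematics, then re-parameterize the VR simulation), the snippet below computes a coarse kinematic feature from one tracked reach and maps it to a hypothetical VR assistance gain. The feature choice, the linear mapping, and all names and constants are assumptions for illustration, not part of the article.

```python
import numpy as np

# Sketch of the assess -> re-parameterize loop: compute path inefficiency
# (travelled path length over straight-line distance) from one tracked reach
# and map it to a hypothetical VR assistance gain. All constants are
# illustrative placeholders.

def path_inefficiency(positions):
    segment_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    straight_line = np.linalg.norm(positions[-1] - positions[0])
    return float(segment_lengths.sum() / straight_line)

def vr_assistance_gain(inefficiency, scale=0.5):
    """Map inefficiency >= 1 onto a [0, 1] assistance gain; a learned model
    would replace this linear placeholder."""
    return float(np.clip((inefficiency - 1.0) / scale, 0.0, 1.0))

rng = np.random.default_rng(1)
dt = 1.0 / 90.0                                   # 90 Hz motion tracking
t = np.arange(0.0, 1.5, dt)[:, None]
ideal = np.hstack([0.3 * t, 0.1 * t, 0.0 * t])    # straight reach (metres)
observed = ideal + 0.0005 * rng.standard_normal(ideal.shape)  # tremor-like jitter

score = path_inefficiency(observed)
gain = vr_assistance_gain(score)
print(f"path inefficiency: {score:.2f} -> VR assistance gain: {gain:.2f}")
```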

    Reinforcement learning estimates muscle activations

    Get PDF
    A digital twin of the human neuromuscular system can substantially improve the prediction of injury risks and the evaluation of readiness to return to sport. Reinforcement learning (RL) algorithms already learn physical quantities that are unmeasurable in biomechanics, and can hence contribute to the development of the digital twin. Our preliminary results confirm the potential of RL algorithms to estimate the muscle activations of an athlete’s moves.
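
    The paper reports only preliminary results and does not specify an algorithm here; as a toy sketch of the general idea (an RL agent proposing muscle activations and being rewarded for matching a commanded movement), the snippet below runs one-step REINFORCE updates of a Gaussian policy against a crude two-muscle, one-joint plant. The plant, reward, and hyperparameters are invented for illustration and are not the paper's musculoskeletal model or RL algorithm.

```python
import numpy as np

# Toy sketch: a Gaussian policy proposes flexor/extensor activations for a
# one-joint "arm"; the reward favours reaching a commanded joint angle.

rng = np.random.default_rng(2)

def arm_angle(activation):
    """Crude static plant: net drive of flexor minus extensor, squashed into
    a joint angle in radians."""
    flexor, extensor = np.clip(activation, 0.0, 1.0)
    return np.pi / 2 * np.tanh(2.0 * (flexor - extensor))

def reward(activation, target):
    return -(arm_angle(activation) - target) ** 2

mean = np.zeros(2)          # policy mean over the two activations
sigma = 0.2                 # fixed exploration noise
lr = 0.05                   # learning rate
target = 0.6                # desired joint angle (rad)

for _ in range(3000):       # one-step REINFORCE updates with a baseline
    a = mean + sigma * rng.standard_normal(2)
    advantage = reward(a, target) - reward(mean, target)
    mean += lr * advantage * (a - mean) / sigma**2

print("learned activations (flexor, extensor):", np.round(np.clip(mean, 0, 1), 2))
print("achieved angle vs target:", round(float(arm_angle(mean)), 2), "vs", target)
```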

    Recipes for calibration and validation of agent-based models in cancer biomedicine

    Full text link
    Computational models and simulations are appealing not only because of their intrinsic characteristics across spatiotemporal scales, their scalability, and their predictive power, but also because the set of problems in cancer biomedicine that can be addressed computationally exceeds the set of those amenable to analytical solutions. Agent-based models and simulations are especially interesting candidates among computational modelling strategies in cancer research due to their ability to replicate realistic local and global interaction dynamics at a convenient and relevant scale. Yet, the absence of methods to validate the consistency of results across scales can hinder adoption by turning fine-tuned models into black boxes. This review compiles relevant literature to explore strategies for leveraging high-fidelity simulations of multi-scale, or multi-level, cancer models, with a focus on validation approached as simulation calibration. We argue that simulation calibration goes beyond parameter optimization by embedding informative priors to generate plausible parameter configurations across multiple dimensions.
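
    As a compact illustration of calibration going beyond parameter optimization, the sketch below draws candidate parameters from informative priors, runs a stand-in stochastic "agent-based" growth model, and keeps the configurations whose output lies near an observed summary statistic (ABC-style rejection). The toy model, priors, observed value, and tolerance are assumptions, not drawn from the review.

```python
import numpy as np

# Sketch of calibration as inference: sample from informative priors,
# simulate, and accept parameter configurations whose summary statistic
# lands close to the observed one (approximate Bayesian computation).

rng = np.random.default_rng(3)

def simulate_tumour_size(growth_rate, death_rate, steps=50, n0=100):
    """Toy birth-death process standing in for a full agent-based simulation."""
    n = n0
    for _ in range(steps):
        births = rng.poisson(growth_rate * n)
        deaths = rng.poisson(death_rate * n)
        n = max(n + births - deaths, 0)
    return n

observed_size = 1800          # pretend this came from experimental data

# Informative priors (e.g. from literature or earlier experiments).
prior_growth = lambda: rng.normal(0.10, 0.02)
prior_death = lambda: rng.normal(0.04, 0.01)

accepted = []
for _ in range(5000):
    g, d = prior_growth(), prior_death()
    if g <= 0 or d <= 0:
        continue
    sim = simulate_tumour_size(g, d)
    if abs(sim - observed_size) / observed_size < 0.10:   # 10% tolerance
        accepted.append((g, d))

accepted = np.array(accepted)
print(f"accepted {len(accepted)} / 5000 candidate configurations")
if len(accepted):
    print("posterior mean (growth, death):", accepted.mean(axis=0).round(3))
```

    The accepted set is a distribution of plausible configurations rather than a single best fit, which is the sense in which calibration exceeds plain parameter optimization.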

    From Adaptive Reasoning to Cognitive Factory: Bringing Cognitive Intelligence to Manufacturing Technology

    Get PDF
    There are two important aspects that will play important roles in future manufacturing systems: changeability and human-machine collaboration. The first aspect, changeability, concerns the ability of production tools to reconfigure themselves to new manufacturing settings, possibly unknown beforehand, while maintaining their reliability at the lowest cost. The second aspect, human-machine collaboration, emphasizes the ability of production tools to position themselves as humans’ co-workers. The interplay between these two aspects will not only determine the economic success of a manufacturing process, but will also shape the future of the technology itself. To address this future challenge of manufacturing systems, the concept of the Cognitive Factory was proposed. Along this line, machines and processes are equipped with cognitive capabilities in order to allow them to assess and increase their scope of operation autonomously. However, the technical implementation of such a concept is still widely open for research, since several stumbling blocks limit the practicality of the proposed methods. In this paper, we introduce our method for achieving the goal of the Cognitive Factory. Our method is inspired by the working mechanisms of the human brain; it works by harnessing the reasoning capabilities of a cognitive architecture. By utilizing such an adaptive reasoning mechanism, we envision future manufacturing systems with cognitive intelligence. We provide illustrative examples from our current research to demonstrate that our proposed method is able to address the primary issues of the Cognitive Factory: changeability and human-machine collaboration.
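
    The paper presents the approach conceptually; purely to illustrate what a cognitive-architecture-style reasoning cycle (match rules against the perceived state, select one, act) can look like for changeability and human-machine collaboration, the sketch below reconfigures a toy production cell when an unknown part type appears. The rules, state fields, and actions are invented for this example.

```python
# Illustrative match-select-act cycle, loosely in the spirit of a
# production-rule cognitive architecture; rules and state are invented here.

state = {"part_type": "unknown", "tool": "gripper_A", "operator_nearby": True}

rules = [
    # (name, condition, action) -- each action returns an updated state
    ("request_human_help",
     lambda s: s["part_type"] == "unknown" and s["operator_nearby"],
     lambda s: {**s, "part_type": "variant_B"}),          # operator labels the part
    ("switch_tool",
     lambda s: s["part_type"] == "variant_B" and s["tool"] != "gripper_B",
     lambda s: {**s, "tool": "gripper_B"}),
    ("start_assembly",
     lambda s: s["part_type"] == "variant_B" and s["tool"] == "gripper_B",
     lambda s: {**s, "done": True}),
]

while not state.get("done"):
    fired = next((r for r in rules if r[1](state)), None)  # match + select
    if fired is None:
        break                                              # impasse: no rule applies
    print("firing:", fired[0])
    state = fired[2](state)                                # act

print("final state:", state)
```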

    Status and recommendations of technological and data-driven innovations in cancer care: Focus group study

    Get PDF
    Background: The status of the data-driven management of cancer care, as well as the challenges, opportunities, and recommendations aimed at accelerating the rate of progress in this field, are topics of great interest. Two international workshops, one held in June 2019 in Cordoba, Spain, and one in October 2019 in Athens, Greece, were organized by four Horizon 2020 (H2020) European Union (EU)-funded projects: BOUNCE, CATCH ITN, DESIREE, and MyPal. The issues covered included patient engagement, knowledge and data-driven decision support systems, the patient journey, rehabilitation, personalized diagnosis, trust, assessment of guidelines, and interoperability of information and communication technology (ICT) platforms. A series of recommendations was provided as the complex landscape of data-driven technical innovation in cancer care was portrayed. Objective: This study aims to provide information on the current state of the art of technology and data-driven innovations for the management of cancer care through the work of four EU H2020-funded projects. Methods: Two international workshops on ICT in the management of cancer care were held, and several topics were identified through discussion among the participants. A focus group was formed after the second workshop, in which the status of technological and data-driven cancer management, as well as the challenges, opportunities, and recommendations in this area, were collected and analyzed. Results: Technical and data-driven innovations provide promising tools for the management of cancer care. However, several challenges must be successfully addressed, such as patient engagement, interoperability of ICT-based systems, knowledge management, and trust. This paper analyzes these challenges, which represent opportunities for further research and practical implementation, and provides practical recommendations for future work. Conclusions: Technology and data-driven innovations are becoming an integral part of cancer care management. In this process, specific challenges need to be addressed, such as increasing trust and engaging the whole stakeholder ecosystem, to fully benefit from these innovations.

    A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics

    No full text
    Biological and technical systems operate in a rich multimodal environment. Due to the diversity of the incoming sensory streams a system perceives and the variety of motor capabilities a system exhibits, there is no single representation and no singular, unambiguous interpretation of such a complex scene. In this work, we propose a novel sensory processing architecture inspired by the distributed macro-architecture of the mammalian cortex. The underlying computation is performed by a network of computational maps, each representing a different sensory quantity. All the different sensory streams enter the system through multiple parallel channels. The system autonomously associates and combines them into a coherent representation, given the incoming observations. These processes are adaptive and involve learning. The proposed framework introduces mechanisms for the self-creation and learning of the functional relations between the computational maps, which encode sensorimotor streams, directly from the data. Its intrinsic scalability, parallelisation, and automatic adaptation to unforeseen sensory perturbations make our approach a promising candidate for robust multisensory fusion in robotic systems. We demonstrate this by applying our model to 3D motion estimation on a quadrotor.
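
    The abstract does not include the learning rules themselves; as a rough sketch of learning a functional relation between two computational maps directly from data (not the authors' actual architecture), the snippet below uses an online delta-rule to learn the mapping between two co-occurring scalar "sensory" quantities and then infers one modality from the other. The linear relation and the quadrotor-flavoured variable names are illustrative assumptions.

```python
import numpy as np

# Sketch: two "computational maps" encode correlated sensory quantities
# (say, optic-flow magnitude and IMU-derived speed on a quadrotor). An online
# least-mean-squares (delta-rule) update learns the functional relation
# between them from co-occurring samples, so either map can later be
# inferred from the other. The linear relation is a simplification.

rng = np.random.default_rng(4)

w, b = 0.0, 0.0                    # learned relation: flow ~= w * speed + b
lr = 0.01

for _ in range(5000):              # stream of co-occurring observations
    speed = rng.uniform(0.0, 5.0)                              # m/s, latent quantity
    flow = 1.8 * speed + 0.3 + 0.05 * rng.standard_normal()    # hidden relation
    pred = w * speed + b
    err = flow - pred
    w += lr * err * speed          # delta-rule update
    b += lr * err

# Cross-modal inference: estimate speed from optic flow alone.
flow_obs = 4.0
speed_est = (flow_obs - b) / w
print(f"learned relation: flow = {w:.2f} * speed + {b:.2f}")
print(f"inferred speed for flow={flow_obs}: {speed_est:.2f} m/s")
```

    In the full framework such relations are learned between many maps at once, which is what allows the missing or perturbed streams to be filled in from the remaining ones.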
